3,678 research outputs found

    Information entropy and dark energy evolution

    The information entropy is here investigated in the context of early and late cosmology under the hypothesis that distinct phases of the universe's evolution are entangled with one another. The approach is based on the \emph{entangled state ansatz}, representing a coarse-grained definition of a primordial \emph{dark temperature} associated with an \emph{effective entangled energy density}. The dark temperature definition comes from assuming either the Von Neumann or the linear entropy as the source of cosmological thermodynamics. We interpret the involved information entropies by means of probabilities of forming structures during cosmic evolution. Following this recipe, we propose that the quantum entropy is simply associated with the thermodynamical entropy, and we investigate the consequences of our approach using the adiabatic sound speed. As byproducts, we analyze two phases of the universe's evolution: the late and early stages. To do so, we first recover that dark energy reduces to a pure cosmological constant as the zero-order entanglement contribution, and second that inflation is well described by means of an effective potential. In both cases, we infer numerical limits which are compatible with current observations. Comment: 12 pages, 1 figure
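
    For reference, the two quantum entropies invoked above as sources of the cosmological thermodynamics follow the standard definitions for a density matrix $\hat\rho$ (a sketch of the textbook conventions; the paper's normalization may differ):

        S_{\rm VN} = -\mathrm{Tr}\!\left(\hat\rho \ln \hat\rho\right), \qquad S_{\rm lin} = 1 - \mathrm{Tr}\!\left(\hat\rho^{\,2}\right),

    both of which vanish for a pure state and increase as the state becomes more mixed.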

    Cosmic acceleration from a single fluid description

    We here propose a new class of barotropic factor for matter, motivated by the properties of isotropic deformations of crystalline solids. Our approach is dubbed Anton-Schmidt's equation of state and provides a non-vanishing, albeit small, pressure term for matter. The corresponding pressure is thus proportional to the logarithm of the universe's volume, i.e. to the density itself, since $V\propto \rho^{-1}$. In the context of solid state physics, we demonstrate that by only invoking standard matter with such a property, we are able to frame the universe's speed-up in a suitable way, without invoking a dark energy term by hand. Our model extends a recent class of dark energy paradigms named \emph{logotropic} dark fluids and depends upon two free parameters, namely $n$ and $B$. Within the Debye approximation, we find that $n$ and $B$ are related to the Gr\"uneisen parameter and the bulk modulus of crystals. We thus show the main differences between our model and the logotropic scenario, and we highlight the most relevant properties of our new equation of state for the background cosmology. Discussions on both the kinematics and dynamics of our new model are presented. We demonstrate that the $\Lambda$CDM model is contained within our approach as a limiting case. Comparisons with the CPL parametrization are also reported in the text. Finally, a Monte Carlo analysis of the most recent low-redshift cosmological data allowed us to place constraints on $n$ and $B$. In particular, we found $n=-0.147^{+0.113}_{-0.107}$ and $B=3.54 \times 10^{-3}$. Comment: 13 pages, 7 figures
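
    As a guide to the functional form (a sketch following the solid-state literature; the paper's own normalization and sign conventions may differ), Anton-Schmidt's equation of state expresses the pressure of an isotropically deformed crystal as

        P(V) = -\beta \left(\frac{V}{V_\ast}\right)^{-n} \ln\!\left(\frac{V}{V_\ast}\right),

    which, with $V\propto\rho^{-1}$ as stated above, gives a pressure logarithmic in the matter density and reduces to a purely logotropic form when the power-law prefactor is switched off ($n=0$).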

    Kinematic model-independent reconstruction of Palatini $f(R)$ cosmology

    A kinematic treatment to trace out the form of $f(R)$ cosmology, within the Palatini formalism, is discussed by only postulating the homogeneity and isotropy of the universe. To figure this out, we build model-independent approximations of the luminosity distance through rational expansions. These approximants extend the Taylor convergence radii computed for the usual cosmographic series. We thus consider both Pad\'e and rational Chebyshev polynomials. They can be used to accurately describe the universe's late-time expansion history, providing further information on the thermal properties of all effective cosmic fluids entering the energy-momentum tensor of Palatini's gravity. To perform our numerical analysis, we relate Palatini's Ricci scalar to the Hubble parameter $H$ and thus write down a single differential equation in terms of the redshift $z$. Therefore, to bound $f(R)$, we make use of the most recent outcomes on the cosmographic parameters obtained from combined data surveys. In particular, our strategy is to select two scenarios, i.e. the $(2,2)$ Pad\'e and $(2,1)$ Chebyshev approximations, since they well approximate the luminosity distance at the lowest possible order. We find that the best analytical matches to the numerical solutions lead to $f(R)=a+bR^n$ with free parameters given by the set $(a, b, n)=(-1.627, 0.866, 1.074)$ for the $(2,2)$ Pad\'e approximation, whereas $f(R)=\alpha+\beta R^m$ with $(\alpha, \beta, m)=(-1.332, 0.749, 1.124)$ for the $(2,1)$ rational Chebyshev approximation. Finally, our results are compared with the $\Lambda$CDM predictions and with previous studies in the literature. Slight departures from General Relativity are also discussed. Comment: 10 pages, 6 figures. Accepted for publication in Gen. Rel. Grav.
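
    To make the approximation step concrete, here is a minimal sketch of how a $(2,2)$ Pad\'e approximant can be built from truncated Taylor coefficients of the luminosity distance; the coefficient values below are placeholders for illustration, not the constrained cosmographic values quoted above.

        import numpy as np
        from scipy.interpolate import pade

        # Hypothetical Taylor coefficients of d_L(z)/(c/H0) about z = 0,
        # truncated at fourth order (placeholder numbers, not fitted values).
        a = [0.0, 1.0, 0.25, -0.10, 0.02]

        # (2,2) Pade approximant: numerator p and denominator q, both of degree 2.
        p, q = pade(a, 2)

        z = 1.5
        print(p(z) / q(z))   # rational approximation of the expansion at redshift z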

    Impacts of fragmented accretion streams onto Classical T Tauri Stars: UV and X-ray emission lines

    Context. The accretion process in Classical T Tauri Stars (CTTSs) can be studied through the analysis of some UV and X-ray emission lines which trace hot gas flows and act as diagnostics of the post-shock downfalling plasma. In the UV band, where higher spectral resolution is available, these lines are characterized by rather complex profiles whose origin is still not clear. Aims. We investigate the origin of UV and X-ray emission at the impact regions of density-structured (fragmented) accretion streams. We study if and how the stream fragmentation and the resulting structure of the post-shock region determine the observed profiles of UV and X-ray emission lines. Methods. We model the impact of an accretion stream consisting of a series of dense blobs onto the chromosphere of a CTTS through 2D MHD simulations. We explore different levels of stream fragmentation and accretion rates. From the model results, we synthesize C IV (1550 {\AA}) and O VIII (18.97 {\AA}) line profiles. Results. The impacts of accreting blobs onto the stellar chromosphere produce reverse shocks propagating through the blobs and shocked upflows. These upflows, in turn, hit and shock the subsequent downfalling fragments. As a result, several plasma components differing in downfalling velocity, density, and temperature are present altogether. The profiles of the C IV doublet are characterized by two main components: one narrow and redshifted to a speed of $\approx 50$ km s$^{-1}$, and the other broader and consisting of subcomponents with redshifts corresponding to speeds in the range $\approx 200$-$400$ km s$^{-1}$. The profiles of the O VIII lines appear more symmetric than those of C IV and are redshifted to a speed of $\approx 150$ km s$^{-1}$. Conclusions. Our model predicts profiles of the C IV line remarkably similar to those observed and explains their origin in a natural way as due to stream fragmentation. Comment: 11 pages, 10 figures
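
    As a quick illustration of how the quoted post-shock speeds translate into measurable line displacements, here is a minimal sketch using the non-relativistic Doppler relation $\Delta\lambda = \lambda_0 v/c$ with the rest wavelengths and speeds quoted above (purely illustrative, not part of the synthesis pipeline of the paper).

        # Non-relativistic Doppler shift of the two diagnostic lines for the
        # post-shock speeds quoted in the abstract (illustrative only).
        c_kms = 299792.458                       # speed of light, km/s
        lines = {"C IV": (1550.0, 50.0),         # rest wavelength [Angstrom], speed [km/s]
                 "O VIII": (18.97, 150.0)}

        for name, (lam0, v) in lines.items():
            dlam = lam0 * v / c_kms
            print(f"{name}: redshifted by {dlam:.4f} Angstrom")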

    Learning Relatedness Measures for Entity Linking

    Entity Linking is the task of detecting, in text documents, relevant mentions of entities of a given knowledge base. To this end, entity-linking algorithms use several signals and features extracted from the input text or from the knowledge base. The most important of such features is entity relatedness. Indeed, we argue that these algorithms benefit from maximizing the relatedness among the relevant entities selected for annotation, since this minimizes disambiguation errors in entity linking. The definition of an effective relatedness function is thus a crucial point in any entity-linking algorithm. In this paper we address the problem of learning high-quality entity relatedness functions. First, we formalize the problem of learning entity relatedness as a learning-to-rank problem. We propose a methodology to create reference datasets on the basis of manually annotated data. Finally, we show that our machine-learned entity relatedness function performs better than other relatedness functions previously proposed, and, more importantly, improves the overall performance of different state-of-the-art entity-linking algorithms.
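
    The learning-to-rank formulation can be sketched with a simple pairwise reduction; the snippet below is a minimal, hypothetical illustration (the feature names, labels, and choice of a linear model are assumptions, not the paper's actual pipeline).

        import numpy as np
        from itertools import combinations
        from sklearn.linear_model import LogisticRegression

        # Each candidate entity pair is described by a feature vector (e.g. link
        # overlap, co-occurrence statistics) with a graded relevance label.
        rng = np.random.default_rng(0)
        X = rng.random((20, 4))                  # hypothetical relatedness features
        y = rng.integers(0, 3, 20)               # hypothetical relevance grades

        # Pairwise reduction: learn to order pairs by their label difference.
        Xd, yd = [], []
        for i, j in combinations(range(len(y)), 2):
            if y[i] != y[j]:
                Xd.append(X[i] - X[j])
                yd.append(int(y[i] > y[j]))

        ranker = LogisticRegression().fit(np.array(Xd), np.array(yd))
        scores = ranker.decision_function(X)     # learned relatedness scores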

    Extended Gravity Cosmography

    Cosmography can be considered as a sort of model-independent approach to tackle the dark energy/modified gravity problem. In this review, the successes and the shortcomings of the $\Lambda$CDM model, based on General Relativity and the standard model of particles, are discussed in view of the most recent observational constraints. The motivations for considering extensions and modifications of General Relativity are taken into account, with particular attention to $f(R)$ and $f(T)$ theories of gravity, where the dynamics is represented by the curvature or the torsion field, respectively. The features of $f(R)$ models are explored in the metric and Palatini formalisms. We discuss the connection between $f(R)$ gravity and scalar-tensor theories, highlighting the role of conformal transformations in the Einstein and Jordan frames. The cosmological dynamics of $f(R)$ models is investigated through the corresponding viability criteria. Afterwards, the equivalent formulation of General Relativity in terms of torsion (Teleparallel Equivalent of General Relativity) and its extension to $f(T)$ gravity are considered. Finally, the cosmographic method is adopted to break the degeneracy among dark energy models. A novel approach, built upon rational Pad\'e and Chebyshev polynomials, is proposed to overcome the limits of standard cosmography based on Taylor expansions. The approach provides accurate model-independent approximations of the Hubble flow. Numerical analyses, based on Monte Carlo Markov Chain integration of cosmic data, are presented to bound the coefficients of the cosmographic series. These techniques are then applied to reconstruct the $f(R)$ and $f(T)$ functions and to frame the late-time expansion history of the universe with no \emph{a priori} assumptions on its equation of state. A comparison of the $\Lambda$CDM cosmological model with $f(R)$ and $f(T)$ models is reported. Comment: 82 pages, 35 figures. Accepted for publication in IJMP
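
    For orientation, the cosmographic series referred to throughout is built from the kinematic parameters defined from the scale factor $a(t)$ in the standard way (a sketch of the usual conventions; signs and normalizations can vary in the literature):

        H = \frac{\dot a}{a}, \qquad q = -\frac{1}{H^2}\frac{\ddot a}{a}, \qquad j = \frac{1}{H^3}\frac{\dddot a}{a}, \qquad s = \frac{1}{H^4}\frac{\ddddot a}{a},

    i.e. the Hubble, deceleration, jerk, and snap parameters entering the Taylor (or Pad\'e/Chebyshev) expansions of the luminosity distance.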

    Cosmographic analysis with Chebyshev polynomials

    The limits of standard cosmography are here revised, addressing the problem of error propagation during statistical analyses. To do so, we propose the use of Chebyshev polynomials to parameterize cosmic distances. In particular, we demonstrate that building up rational Chebyshev polynomials significantly reduces error propagation with respect to standard Taylor series. This technique provides unbiased estimates of the cosmographic parameters and performs significantly better than previous numerical approximations. To show this, we compare rational Chebyshev polynomials with Pad\'e series. In addition, we theoretically evaluate the convergence radius of the (1,1) rational Chebyshev polynomial and compare it with the convergence radii of the Taylor and Pad\'e approximations. We thus focus on regions in which the convergence of rational Chebyshev functions is better than that of standard approaches. With this recipe, as high-redshift data are employed, rational Chebyshev polynomials remain highly stable and enable one to derive accurate analytical approximations of the Hubble rate in terms of the cosmographic series. Finally, we check our theoretical predictions by setting bounds on the cosmographic parameters through Monte Carlo integration techniques based on the Metropolis-Hastings algorithm. We apply our technique to high-redshift cosmic data, using the JLA supernovae sample and the most recent versions of the Hubble parameter and baryon acoustic oscillation measurements. We find that cosmography with Taylor series fails to be predictive with the aforementioned data sets, while it turns out to be much more stable using the Chebyshev approach. Comment: 17 pages, 6 figures, 5 tables
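
    The stability argument can be illustrated with a minimal numerical sketch: fit a Chebyshev polynomial to a fiducial luminosity distance over the redshift range of interest and inspect the residuals. The fiducial flat-$\Lambda$CDM values and the polynomial degree below are assumptions for illustration only, not the constraints derived in the paper.

        import numpy as np
        from numpy.polynomial import chebyshev as C
        from scipy.integrate import quad

        H0, Om, c = 70.0, 0.3, 299792.458        # assumed fiducial values (km/s/Mpc, km/s)

        def E(z):                                # dimensionless Hubble rate, flat LCDM
            return np.sqrt(Om * (1.0 + z)**3 + 1.0 - Om)

        def d_L(z):                              # luminosity distance in Mpc
            dc, _ = quad(lambda x: 1.0 / E(x), 0.0, z)
            return (1.0 + z) * (c / H0) * dc

        z = np.linspace(0.0, 2.0, 200)
        dl = np.array([d_L(zi) for zi in z])

        # A low-degree Chebyshev fit keeps the error nearly uniform over the whole
        # interval, unlike a Taylor expansion about z = 0 pushed to high redshift.
        cheb = C.Chebyshev.fit(z, dl, deg=5)
        print("max |residual| [Mpc]:", np.max(np.abs(cheb(z) - dl)))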

    Accretion disk coronae of Intermediate Polar Cataclysmic Variables - 3D MagnetoHydro-Dynamic modeling and thermal X-ray emission

    IPCVs contain a magnetic, rotating white dwarf surrounded by a magnetically truncated accretion disk. Accretion has been successfully invoked to explain their strong flickering X-ray emission. Nevertheless, observations suggest that accretion phenomena may not be the only process behind it. An intense flaring activity occurring on the surface of the disk may generate a corona, contribute to the thermal X-ray emission and influence the system's stability. Our purposes are: investigating the formation of an extended corona above the accretion disk, due to an intense flaring activity occurring on the disk surface; studying its effects on the disk and the stellar magnetosphere; and assessing its contribution to the observed X-ray flux. We have developed a 3D MHD model of an IPCV. The model takes into account gravity, disk viscosity, thermal conduction, radiative losses and coronal flare heating. To explore the parameter space, several system conditions have been considered, with different magnetic field intensities and disk density values. From the results of the evolution of the model, we have synthesized the thermal X-ray emission. The simulations show the formation of an extended corona linking the disk and the star. The flaring activity is capable of strongly influencing the disk configuration and its stability, effectively deforming the magnetic field lines. Hot plasma evaporation phenomena occur in the layer immediately above the disk. The flaring activity gives rise to thermal X-ray emission in both the [0.1-2.0] keV and the [2.0-10] keV bands. An intense coronal activity occurring on the disk surface of an IPCV can affect the structure of the disk, depending noticeably on the density of the disk and the magnetic field of the central object. Moreover, the synthesis of the thermal X-ray fluxes shows that this flaring activity may contribute to the observed thermal X-ray emission.
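
    The synthesis step can be sketched as weighting each simulation cell's emission measure by a band-integrated emissivity; the snippet below is a schematic illustration under that assumption (the array names and emissivity table are hypothetical, not the actual pipeline used in the work).

        import numpy as np

        def band_luminosity(EM, T, T_grid, Lambda_band):
            """Band-integrated thermal X-ray luminosity [erg/s] from simulation cells.

            EM: per-cell emission measures [cm^-3]; T: per-cell temperatures [K];
            T_grid, Lambda_band: tabulated band emissivity [erg cm^3 s^-1], e.g.
            precomputed with an optically thin plasma emission code.
            """
            lam = np.interp(T, T_grid, Lambda_band)   # emissivity at each cell temperature
            return float(np.sum(EM * lam))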